In statistics, kernel Fisher discriminant analysis (KFD), also known as generalized discriminant analysis and kernel discriminant analysis, is a kernelized version of linear discriminant analysis (LDA). It is named after Ronald Fisher. Using the kernel trick, LDA is implicitly performed in a new feature space, which allows non-linear mappings to be learned.

==Linear discriminant analysis==

Intuitively, the idea of LDA is to find a projection where class separation is maximized. Given two sets of labeled data, <math>\mathbf{C}_1</math> and <math>\mathbf{C}_2</math>, define the class means <math>\mathbf{m}_1</math> and <math>\mathbf{m}_2</math> to be

:<math>\mathbf{m}_i = \frac{1}{l_i}\sum_{n=1}^{l_i}\mathbf{x}_n^i,</math>

where <math>l_i</math> is the number of examples of class <math>\mathbf{C}_i</math>. The goal of linear discriminant analysis is to give a large separation of the class means while also keeping the in-class variance small. This is formulated as maximizing, with respect to <math>\mathbf{w}</math>, the ratio

:<math>J(\mathbf{w}) = \frac{\mathbf{w}^\mathsf{T}\mathbf{S}_B\mathbf{w}}{\mathbf{w}^\mathsf{T}\mathbf{S}_W\mathbf{w}},</math>

where <math>\mathbf{S}_B</math> is the between-class covariance matrix and <math>\mathbf{S}_W</math> is the total within-class covariance matrix:

:<math>\begin{align}
\mathbf{S}_B &= (\mathbf{m}_2-\mathbf{m}_1)(\mathbf{m}_2-\mathbf{m}_1)^\mathsf{T} \\
\mathbf{S}_W &= \sum_{i=1,2}\sum_{n=1}^{l_i}(\mathbf{x}_n^i-\mathbf{m}_i)(\mathbf{x}_n^i-\mathbf{m}_i)^\mathsf{T}.
\end{align}</math>

Differentiating <math>J(\mathbf{w})</math> with respect to <math>\mathbf{w}</math>, setting equal to zero, and rearranging gives

:<math>(\mathbf{w}^\mathsf{T}\mathbf{S}_B\mathbf{w})\mathbf{S}_W\mathbf{w} = (\mathbf{w}^\mathsf{T}\mathbf{S}_W\mathbf{w})\mathbf{S}_B\mathbf{w}.</math>

Since we only care about the direction of <math>\mathbf{w}</math> and <math>\mathbf{S}_B\mathbf{w}</math> has the same direction as <math>\mathbf{m}_2-\mathbf{m}_1</math>, <math>\mathbf{S}_B\mathbf{w}</math> can be replaced by <math>\mathbf{m}_2-\mathbf{m}_1</math>, and we can drop the scalars <math>\mathbf{w}^\mathsf{T}\mathbf{S}_B\mathbf{w}</math> and <math>\mathbf{w}^\mathsf{T}\mathbf{S}_W\mathbf{w}</math> to give

:<math>\mathbf{w} \propto \mathbf{S}_W^{-1}(\mathbf{m}_2-\mathbf{m}_1).</math>

Source: the free encyclopedia Wikipedia, article "Kernel Fisher discriminant analysis".
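The closed-form solution above, <math>\mathbf{w} \propto \mathbf{S}_W^{-1}(\mathbf{m}_2-\mathbf{m}_1)</math>, can be sketched in a few lines of NumPy. This is a minimal illustration on synthetic two-class data (the Gaussian clusters and their parameters are invented for this example, not taken from the article):

```python
import numpy as np

# Hypothetical toy data: two 2-D Gaussian classes (for illustration only).
rng = np.random.default_rng(0)
X1 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))  # class C_1
X2 = rng.normal(loc=[2.0, 1.0], scale=0.5, size=(50, 2))  # class C_2

# Class means m_1 and m_2.
m1, m2 = X1.mean(axis=0), X2.mean(axis=0)

# Total within-class covariance S_W: sum of the two class scatter matrices.
S_W = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)

# Fisher direction: w is proportional to S_W^{-1} (m_2 - m_1).
w = np.linalg.solve(S_W, m2 - m1)
w /= np.linalg.norm(w)  # normalize; only the direction of w matters

# Projecting each class onto w separates the projected class means.
p1, p2 = X1 @ w, X2 @ w
print("separation of projected means:", p2.mean() - p1.mean())
```

Note that `np.linalg.solve` is used instead of explicitly inverting <math>\mathbf{S}_W</math>, which is the numerically preferred way to apply <math>\mathbf{S}_W^{-1}</math> to a vector.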